Fundamental modelling differences between hybrid and end-to-end (E2E) automatic speech recognition (ASR) systems create large diversity and complementarity among them. This paper investigates multi-pass rescoring and cross-adaptation based system combination approaches for hybrid TDNN and Conformer E2E ASR systems. In multi-pass rescoring, state-of-the-art hybrid LF-MMI trained CNN-TDNN systems featuring speed perturbation, SpecAugment and Bayesian learned hidden unit contributions (LHUC) speaker adaptation are used to produce initial N-best outputs, which are then rescored by speaker-adapted Conformer systems using a two-way cross-system score interpolation. In cross adaptation, the hybrid CNN-TDNN system is adapted to the 1-best output of the Conformer system, or vice versa. Experiments on the 300-hour Switchboard corpus suggest that the combined systems derived using either of the two system combination approaches outperform the individual systems on the NIST Hub5'00, RT02 and RT03 evaluation data.
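As a hedged illustration of the two-way cross-system score interpolation described above, a rescoring pass over an N-best list might look like the minimal sketch below; the hypothesis format, score field names, and the interpolation weight `lam` are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of 2-way cross-system score interpolation for N-best rescoring.
# Assumption: each hypothesis carries log-domain scores from both systems; `lam` is tuned on a dev set.

def rescore_nbest(nbest, lam=0.5):
    """nbest: list of dicts with 'text', 'hybrid_score', 'conformer_score' (log-domain)."""
    def combined(hyp):
        # Linear interpolation of the two systems' scores for each hypothesis.
        return lam * hyp["hybrid_score"] + (1.0 - lam) * hyp["conformer_score"]
    # Return the hypothesis with the highest interpolated score.
    return max(nbest, key=combined)["text"]

# Example usage with made-up scores:
nbest = [
    {"text": "uh i think so", "hybrid_score": -12.3, "conformer_score": -10.1},
    {"text": "uh i think though", "hybrid_score": -11.8, "conformer_score": -13.0},
]
print(rescore_nbest(nbest, lam=0.5))
```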
Building instance detection models that can handle rare object categories is an important challenge in computer vision. However, there is little research on data collection methods and metrics for applying such detectors with neural networks in practical scenarios. Here we conduct a systematic study of object-occlusion-based data collection and augmentation methods, in which we mimic the object occlusion relations of the target scene. We find that a simple mechanism for handling object occlusion is good enough and can provide acceptable accuracy in practical scenarios when new categories are added. We show that adding only 15 images of a new category to a 500,000-image training set covering hundreds of categories can yield roughly 95% accuracy for this new category on an unseen test set containing thousands of images of the category.
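A hedged sketch of the kind of occlusion-aware paste augmentation discussed above: object crops are composited onto a scene back-to-front so that later objects occlude earlier ones. The image layout, crop format, and paste order are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def paste_with_occlusion(scene, crops, masks, positions):
    """Paste object crops onto a scene image back-to-front so later objects occlude earlier ones.

    scene: HxWx3 uint8 array; crops/masks: lists of hxwx3 / hxw arrays; positions: list of (y, x).
    Returns the augmented image and per-object visibility masks (usable for box/label generation).
    """
    out = scene.copy()
    H, W = out.shape[:2]
    visibility = []
    for crop, mask, (y, x) in zip(crops, masks, positions):
        h, w = mask.shape
        h, w = min(h, H - y), min(w, W - x)
        region = out[y:y + h, x:x + w]
        m = mask[:h, :w].astype(bool)
        region[m] = crop[:h, :w][m]            # the new object overwrites (occludes) what was there
        for v in visibility:                   # earlier objects lose visibility where this one covers them
            v[y:y + h, x:x + w][m] = False
        full = np.zeros((H, W), dtype=bool)
        full[y:y + h, x:x + w] = m
        visibility.append(full)
    return out, visibility
```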
3D object detection is a key module for safety-critical robotics applications such as autonomous driving. For these applications, we care most about how detection affects the ego agent's behavior and safety (the egocentric perspective). Intuitively, we seek a more accurate description of an object's geometry when it is more likely to interfere with the ego agent's motion trajectory. However, current detection metrics based on box Intersection-over-Union (IoU) are object-centric and are not designed to capture the spatio-temporal relationship between an object and the ego agent. To address this, we propose a new egocentric measure for evaluating 3D object detection, namely Support Distance Error (SDE). Our SDE-based analysis shows that egocentric detection quality is bounded by the coarse geometry of bounding boxes. Given the insight that SDE would benefit from more accurate geometric descriptions, we propose to represent objects as amodal contours, specifically amodal star-shaped polygons, and design a simple model, StarPoly, to predict such contours. Our experiments on the large-scale Waymo Open Dataset show that SDE better reflects the impact of detection quality on the ego agent's safety compared with IoU, and that the contours estimated by StarPoly consistently improve the egocentric detection quality of recent 3D object detectors.
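As a loose, hedged reading of the Support Distance Error idea, one can compare the distance from the ego agent to the predicted object contour with the distance to the ground-truth contour. The nearest-vertex formulation below is a simplifying assumption for illustration; the paper's exact definition may differ.

```python
import numpy as np

def support_distance(ego_xy, contour_xy):
    """Distance from the ego position to the closest vertex of an object contour (Nx2 array of points)."""
    return np.min(np.linalg.norm(contour_xy - ego_xy, axis=1))

def support_distance_error(ego_xy, pred_contour, gt_contour):
    """Hedged proxy for SDE: absolute error between predicted and ground-truth support distances."""
    return abs(support_distance(ego_xy, pred_contour) - support_distance(ego_xy, gt_contour))

# Toy example: a box contour vs. a slightly shifted prediction, ego agent at the origin.
ego = np.array([0.0, 0.0])
gt = np.array([[5.0, -1.0], [5.0, 1.0], [7.0, 1.0], [7.0, -1.0]])
pred = gt + np.array([0.3, 0.0])
print(support_distance_error(ego, pred, gt))  # ~0.3
```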
We address the problem of recovering the shape and spatially-varying reflectance of an object from multi-view images (and their camera poses) of the object illuminated by one unknown lighting condition. This enables rendering novel views of the object under arbitrary environment lighting and editing the object's material properties. The key to our approach, which we call Neural Radiance Factorization (NeRFactor), is to distill the volumetric geometry of a Neural Radiance Field (NeRF) [Mildenhall et al. 2020] representation of the object into a surface representation and then jointly refine the geometry while solving for the spatially-varying reflectance and environment lighting. Specifically, NeRFactor recovers 3D neural fields of surface normals, light visibility, albedo, and Bidirectional Reflectance Distribution Functions (BRDFs) without any supervision, using only a re-rendering loss, simple smoothness priors, and a data-driven BRDF prior learned from real-world BRDF measurements. By explicitly modeling light visibility, NeRFactor is able to separate shadows from albedo and synthesize realistic soft or hard shadows under arbitrary lighting conditions. NeRFactor is able to recover convincing 3D models in this challenging and realistic capture setting for both synthetic and real scenes. Qualitative and quantitative experiments show that NeRFactor outperforms classic and deep learning-based state of the art across various tasks. Our videos, code, and data are available at people.csail.mit.edu/xiuming/projects/nerfactor/.
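A hedged sketch of the role explicit light visibility plays in such a factorized re-rendering model: a single-bounce Lambertian approximation over a discretized environment map. This is an illustrative simplification; the actual NeRFactor formulation also uses a learned BRDF prior rather than pure Lambertian shading.

```python
import numpy as np

def shade_point(albedo, normal, light_dirs, light_radiance, visibility):
    """Single-bounce Lambertian shading at a surface point.

    albedo: (3,) RGB; normal: (3,) unit vector; light_dirs: (L, 3) unit directions of environment samples;
    light_radiance: (L, 3) radiance per sample; visibility: (L,) predicted light visibility in [0, 1].
    Multiplying by visibility is what lets shadows be separated from albedo.
    """
    cos = np.clip(light_dirs @ normal, 0.0, None)              # (L,) foreshortening term
    incoming = light_radiance * (visibility * cos)[:, None]     # shadowed, foreshortened incoming light
    return (albedo / np.pi) * incoming.mean(axis=0)             # Monte Carlo-style average over samples
```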
We propose a self-supervised capsule architecture for 3D point clouds. We compute capsule decompositions of objects through permutation-equivariant attention, and self-supervise the process by training on pairs of randomly rotated objects. Our key idea is to aggregate the attention masks into semantic keypoints and use these to supervise a decomposition that satisfies the capsule invariance/equivariance properties. This not only enables training semantically consistent decompositions, but also allows us to learn a canonicalization operation that enables object-centric reasoning. Training our neural network requires neither classification labels nor manually-aligned training datasets. Yet, by learning an object-centric representation in a self-supervised manner, our method achieves state-of-the-art performance on 3D point cloud reconstruction, canonicalization, and unsupervised classification.
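A hedged sketch of how per-point attention masks can be aggregated into per-capsule keypoints, as described above. Only the pooling step is shown; the attention computation itself and the number of capsules are assumptions made for illustration.

```python
import numpy as np

def capsule_keypoints(points, attention):
    """Aggregate per-point attention masks into one 3D keypoint per capsule.

    points: (N, 3) point cloud; attention: (K, N) non-negative attention masks, one row per capsule.
    Each keypoint is the attention-weighted centroid of the cloud: permutation-invariant in the points
    and rotating with the object (equivariant) when the attention masks are.
    """
    weights = attention / (attention.sum(axis=1, keepdims=True) + 1e-8)  # normalize each capsule's mask
    return weights @ points                                              # (K, 3) keypoints
```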
We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions. Our method represents the scene as a continuous volumetric function parameterized as MLPs whose inputs are a 3D location and whose outputs are the following scene properties at that input location: volume density, surface normal, material parameters, distance to the first surface intersection in any direction, and visibility of the external environment in any direction. Together, these allow us to render novel views of the object under arbitrary lighting, including indirect illumination effects. The predicted visibility and surface intersection fields are critical to our model's ability to simulate direct and indirect illumination during training, because the brute-force techniques used by prior work are intractable for lighting conditions outside of controlled setups with a single light. Our method outperforms alternative approaches for recovering relightable 3D scene representations, and performs well in complex lighting settings that have posed a significant challenge to prior work.
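A hedged sketch of the kind of location-conditioned MLP query described above. Layer sizes, positional encoding, and the output parameterization are assumptions; only the input/output structure follows the description (density, normal, material parameters, per-direction surface distance and environment visibility).

```python
import torch
import torch.nn as nn

class SceneFieldMLP(nn.Module):
    """Toy MLP mapping a 3D location to the per-point scene properties listed in the text."""
    def __init__(self, hidden=256, n_dirs=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.density = nn.Linear(hidden, 1)          # volume density
        self.normal = nn.Linear(hidden, 3)           # surface normal (normalized below)
        self.material = nn.Linear(hidden, 4)         # e.g. albedo + roughness
        self.surf_dist = nn.Linear(hidden, n_dirs)   # distance to first surface per sampled direction
        self.visibility = nn.Linear(hidden, n_dirs)  # environment visibility per sampled direction

    def forward(self, xyz):
        h = self.trunk(xyz)
        return {
            "density": torch.relu(self.density(h)),
            "normal": nn.functional.normalize(self.normal(h), dim=-1),
            "material": torch.sigmoid(self.material(h)),
            "surface_distance": torch.relu(self.surf_dist(h)),
            "visibility": torch.sigmoid(self.visibility(h)),
        }
```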
Efficiently representing articulated objects such as human bodies is an important problem in computer vision and graphics. To simulate deformation efficiently, existing approaches represent 3D objects with polygonal meshes and deform them using skinning techniques. This paper introduces Neural Articulated Shape Approximation (NASA), an alternative framework that efficiently represents articulated deformable objects using neural indicator functions conditioned on pose. Occupancy testing with NASA is straightforward and circumvents the complexity and water-tightness issues of meshes. We demonstrate the effectiveness of NASA for 3D tracking applications and discuss other potential extensions.
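A hedged sketch of a pose-conditioned neural indicator (occupancy) function and the straightforward occupancy test it enables; the network size and pose encoding are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PoseConditionedOccupancy(nn.Module):
    """Toy indicator function: (3D query point, pose vector) -> probability the point is inside the body."""
    def __init__(self, pose_dim=72, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, pose):
        # points: (N, 3); pose: (pose_dim,) broadcast to every query point.
        pose = pose.expand(points.shape[0], -1)
        return torch.sigmoid(self.net(torch.cat([points, pose], dim=-1))).squeeze(-1)

# Occupancy testing is a single forward pass and a threshold; no mesh or water-tightness check is needed.
model = PoseConditionedOccupancy()
inside = model(torch.randn(1024, 3), torch.zeros(72)) > 0.5
```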
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
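A hedged sketch of a feed-forward layer whose behavior is modulated by a style code. The modulation form here, a per-channel scale and shift predicted from the style code, is one plausible way to wire such a style-aware adaptation and is not claimed to be the paper's exact design.

```python
import torch
import torch.nn as nn

class StyleAdaptiveFFN(nn.Module):
    """Transformer feed-forward block whose hidden activations are scaled/shifted by a style code."""
    def __init__(self, d_model=256, d_ff=1024, d_style=128):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(d_model, d_ff), nn.Linear(d_ff, d_model)
        self.to_scale_shift = nn.Linear(d_style, 2 * d_ff)   # style code -> per-channel (scale, shift)

    def forward(self, x, style_code):
        # x: (B, T, d_model); style_code: (B, d_style)
        scale, shift = self.to_scale_shift(style_code).chunk(2, dim=-1)   # each (B, d_ff)
        h = torch.relu(self.fc1(x))
        h = h * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)             # style modulates the FFN
        return self.fc2(h)
```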
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL), yet it has been criticized for learning inefficiency. We believe insufficient utilization of training signals is responsible. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch under a disjoint regulation, raising the number of tokens used for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture to predict invisible (masked) and visible (unmasked) tokens, respectively, with superior learning targets. Rooted in orthogonal perspectives on training efficiency, DM and JD cooperatively accelerate training convergence without sacrificing the model's generalization ability. Concretely, DM can train ViT with half of the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves the linear probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks such as semantic segmentation and object detection, our DMJD also shows superior generalization compared with state-of-the-art SSL methods. The code and model will be made public at https://github.com/mx-mark/DMJD.
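A hedged sketch of one simple reading of the disjoint-masking step: the views' visible token sets partition the image, so each view keeps the same masking rate while every token becomes a reconstruction target in the other views. The token count, number of views, and this particular disjointness scheme are illustrative assumptions; the joint-distillation branch is not shown.

```python
import numpy as np

def disjoint_masked_views(num_tokens, num_views=4):
    """Sample multiple masked views per image whose visible token sets are disjoint.

    Each view masks (1 - 1/num_views) of the tokens (e.g. 75% for 4 views), and every token is
    visible in exactly one view, so all tokens are reconstructed in the remaining views.
    Returns a (num_views, num_tokens) boolean array where True marks a masked token.
    """
    order = np.random.permutation(num_tokens)
    chunks = np.array_split(order, num_views)              # disjoint visible sets
    masks = np.ones((num_views, num_tokens), dtype=bool)
    for v, visible in enumerate(chunks):
        masks[v, visible] = False                          # unmask this view's visible tokens
    return masks

masks = disjoint_masked_views(196, num_views=4)            # e.g. a 14x14 ViT patch grid
print(masks.mean(axis=1))                                  # each view masks ~75% of the tokens
```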
Recently, great progress has been made in single-image super-resolution (SISR) based on deep learning. However, existing methods usually require a large computational cost, and the activation function can cause some intermediate-layer features to be lost. It is therefore a challenge to make the model lightweight while reducing the impact of intermediate feature loss on reconstruction quality. In this paper, we propose a Feature Interaction Weighted Hybrid Network (FIWHN) to alleviate this problem. Specifically, FIWHN consists of a series of novel Wide-residual Distillation Interaction Blocks (WDIB) as the backbone, where every three WDIBs form a Feature shuffle Weighted Group (FSWG) through mutual information mixing and fusion. In addition, to mitigate the adverse effects of intermediate feature loss on the reconstruction results, we introduce well-designed Wide Convolutional Residual Weighting (WCRW) and Wide Identical Residual Weighting (WIRW) units in the WDIB, and effectively cross-fuse features of different granularities through a Wide-residual Distillation Connection (WRDC) framework and a Self-Calibrating Fusion (SCF) unit. Finally, to complement the global features lacking in the CNN model, we introduce the Transformer into our model and explore a new way of combining the CNN and Transformer. Extensive quantitative and qualitative experiments on low-level and high-level tasks show that our proposed FIWHN achieves a good balance between performance and efficiency, and is more conducive to downstream tasks for solving problems in low-pixel scenarios.
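A hedged sketch of the general flavor of a residual weighting unit mentioned above: a wide residual branch is combined with the identity path through a learnable per-channel weight. The actual WIRW/WCRW designs are more elaborate; this is only an illustrative assumption about the basic idea.

```python
import torch
import torch.nn as nn

class WideResidualWeighting(nn.Module):
    """Toy wide residual unit: expand channels, process, project back, and weight the residual branch."""
    def __init__(self, channels=48, expansion=4):
        super().__init__()
        wide = channels * expansion
        self.body = nn.Sequential(
            nn.Conv2d(channels, wide, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(wide, channels, 3, padding=1),
        )
        # Learnable per-channel weight on the residual branch: one simple form of "residual weighting".
        self.weight = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x):
        return x + self.weight * self.body(x)
```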